Annals of Emerging Technologies in Computing (AETiC)

 
Table of Contents

         Table of Contents (Volume #9, Issue #3)


 
Cover Page

         Cover Page (Volume #9, Issue #3)


 
Editorial

         Editorial (Volume #9, Issue #3)


 
Paper #1                                                                             

Comparative Study of ML-Based Diabetes Detection Using IoT and Lab Data in Fog

Edmira Xhaferra, Florije Ismaili and Elda Cina


Abstract: Diabetes, as a chronic condition affecting millions of people worldwide, requires early diagnosis and continuous monitoring to prevent complications. The rise of machine learning (ML) applications in healthcare offers promising approaches for diagnosing and managing diabetes more effectively. Machine learning models can analyse extensive amounts of data to identify patterns that may be invisible to human clinicians, improving diagnostic accuracy and enabling personalized care. This study investigates the performance of four machine learning models, Decision Tree, Logistic Regression, Random Forest, and Support Vector Machine (SVM), in detecting diabetes using two types of data: traditional lab-based data and real-time data accessed from Internet of Things (IoT) sensors. Data was collected from continuous glucose monitors (CGMs) and wearables, as well as clinical lab records in Albania. The results revealed that machine learning models applied to IoT data significantly outperformed those applied to lab data, demonstrating higher accuracy and better predictive metrics. The continuous monitoring enabled by IoT devices allows for real-time detection of glucose fluctuations, providing earlier and more precise diabetes diagnosis. Additionally, integrating IoT with fog computing reduces latency and enhances timely decision-making, allowing for prompt interventions in patient care. The study highlights the transformative potential of combining IoT, machine learning, and fog computing to revolutionize healthcare, particularly the management of chronic diseases such as diabetes. The findings suggest that IoT-based systems should be adopted to improve diabetes detection and monitoring, enabling a shift toward proactive healthcare solutions. Future research could explore the application of these technologies to managing other chronic conditions and optimizing machine-learning models for large-scale datasets.


Keywords: Continuous Glucose Monitoring; Diabetes Detection; Fog Computing; IoT; Machine Learning Models.


Download Full Text


 
Paper #2                                                                             

Investigating the Accuracy of the GPT2 Algorithm in Classifying Identified Targets for an Intelligent Virtual Assistant

Shangying Guo and Jing Zhao


Abstract: Natural Language Understanding (NLU) is a branch of Natural Language Processing (NLP) that focuses on enabling computers to interpret human language with a level of understanding comparable to humans. NLU encompasses several tasks, including parsing sentences to understand grammatical structure, identifying word and phrase meanings, and determining user intent from natural language inputs. Many AI systems today, such as chatbots and virtual assistants, rely on NLU to accurately interpret and respond to user inputs in real time. This study addresses the challenge of accurately classifying user intents in multilingual intelligent virtual assistants, a task critical for enhancing real-time human-computer interaction, by exploring the application of seven GPT-2-based models, leveraging their embedding matrices and tokenizers to design a robust intent-classification framework. The models in this study vary in the number of final layers and the dimensional configurations used for classification. Through a large-scale case study with over one million utterances in 51 languages, the models were evaluated on key metrics such as Accuracy, Precision, Recall, and F1-Score. Findings indicate that the GPT-256 model consistently achieved the highest values across these metrics, establishing it as the most accurate of the models tested. The GPT-256256 and GPT-128128 models followed closely, both showing competitive performance with slightly lower accuracy than GPT-256. These results underscore the effectiveness of specific model configurations in improving NLU for virtual assistants, particularly in multilingual applications. The findings provide insight into optimizing AI systems for accurate intent classification, enhancing the ability of virtual assistants to understand and respond to diverse user inputs more precisely across languages, making them highly adaptable for global applications.


Keywords: Classification; GPT2; Natural Language Processing; Natural Language Understanding; Transformer; Virtual Assistant.


Download Full Text


 
Paper #3                                                                             

Deep Learning and Transformers Accuracy in Rumor Detection on Social Media

Long Yu, Jiarui Dai, Jiaqi Dai and Yanan Wang


Abstract: The increasing popularity of social media platforms has revolutionized how news and information are shared. While these platforms facilitate rapid dissemination, they also provide fertile ground for the proliferation of rumors and unverified information. False information spreads as quickly as accurate news, often influencing public opinion and decision-making processes. Identifying rumors early is critical to limiting their potential harm and mitigating negative consequences. This study evaluates the practical application and scalability of transformer-based models, specifically GPT-2, in detecting rumors on social media platforms alongside traditional deep learning (DL) models. We explore various deep learning models, including Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), ALBERT, and GPT-2. Performance was assessed using standard evaluation metrics, including accuracy, precision, recall, F1-score, and analysis of Receiver Operating Characteristic (ROC) curves. The comparative results reveal that transformer-based approaches significantly outperform traditional DL models, detecting rumors with higher accuracy and reliability. Among the evaluated models, GPT-2 achieved the highest scores across all performance metrics, demonstrating exceptional capability in identifying and predicting rumor-laden content. This study introduces key innovations, including a direct comparative analysis of transformer-based and traditional DL models, highlighting GPT-2's advanced attention mechanisms that capture nuanced linguistic and contextual features. Additionally, it underscores GPT-2's scalability for real-world misinformation mitigation and critically examines dataset biases and adaptability challenges. Future advancements, such as multimodal approaches integrating text, images, and videos, as well as hybrid models combining transformers with traditional DL techniques, are proposed to enhance detection accuracy and efficiency.
These findings underline the transformative potential of advanced AI techniques in combating misinformation on social media platforms, and the research emphasizes that scalable, practical deployment of GPT-2-based tools can help curb the dissemination of false information, contributing to a more reliable and resilient digital ecosystem.


Keywords: Deep Learning (DL); GPT-2 Model; Rumor Detection; Social Media Misinformation; Transformer-Based Models.


Download Full Text
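As a generic illustration (not code from the paper), the evaluation metrics this study reports, accuracy, precision, recall, and F1-score, all reduce to confusion-matrix arithmetic; the counts `tp`, `fp`, `fn`, `tn` below are hypothetical:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts:
    accuracy, precision, recall, and F1-score."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical detector: flags 80 of 100 rumors, wrongly flags 10 of 100
# genuine posts.
m = classification_metrics(tp=80, fp=10, fn=20, tn=90)
```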


 
Paper #4                                                                             

A Systematic Literature Review on Virtual Reality Applications in Medical Education

William Wasswa and Jonathan Andrew Ware


Abstract: Virtual Reality (VR) is increasingly used in medical education, allowing students to experience clinical scenarios safely. This immersive approach holds significant potential for improving knowledge retention, procedural competence, and decision-making skills. This study critically analyses key VR technologies and their applications in medical training, focusing on identifying scalable and context-appropriate solutions for resource-limited settings. The goal is to provide actionable insights for African researchers, educators, and policymakers on how VR can be adapted and integrated into local medical education systems. A systematic literature review was conducted: PubMed, Google Scholar, Scopus, and ScienceDirect were searched for studies on VR in medical education, using the keyword "VR" combined with "Medical Education". Studies unrelated to healthcare were excluded. Additionally, a custom web-mining algorithm identified VR technologies from companies, academia, and research organisations. Fifty papers and 30 VR technologies met the selection criteria. Several platforms, including Goggleminds, Immersion, Avantis, and HoloLens, were identified as having been customised for medical training. Universities have adopted these platforms to establish VR labs, enhancing medical education. Findings indicate that VR is realistic, feasible, impactful, and applicable, supporting both technical and non-technical skill training. Beyond education, VR is also used in digital diagnostics, virtual therapy, surgery planning, patient care, and treatment management. VR is an emerging technology revolutionising medical education by enabling interactive learning and safer clinical training. It improves healthcare workers' performance and enhances patient safety. However, successful VR adoption depends on more than just technology.
Policymakers must ensure equitable access to ICT infrastructure, develop supportive policies, and promote research and innovation in VR applications for healthcare.


Keywords: Augmented Reality; Healthcare; Medical Education; Virtual Reality.


Download Full Text


 
Paper #5                                                                             

CNPMap: A Novel Approach for Ontology Alignment Capturing Beyond-Neighbourhoods Semantic Similarities

Abderrahmane Messous and Fatiha Barigou


Abstract: Ensuring semantic interoperability between heterogeneous systems remains a challenging task due to the structural complexity and diversity of ontological representations. Traditional ontology alignment methods often focus on local features, overlooking important semantic relationships beyond direct neighbourhoods. Here, we introduce CNPMap, a novel alignment approach that addresses this limitation by capturing non-local semantic similarities using a critical node-based partitioning strategy. CNPMap operates in three stages. First, it generates an initial alignment using a hybrid linguistic similarity measure. Then, a graph-based partitioning method exploits the Critical Node Detection Problem (CNDP) to divide ontologies into semantically coherent components. Finally, a context-aware similarity enhancement phase refines the alignments using a sigmoid function that modulates similarities based on both partition-level and entity-level relationships. We evaluated CNPMap on the OAEI 2023 Conference track. The approach improved the F-measure on several ontology pairs by 3% to 6% compared to baseline lexical matchers. For instance, the F-measure increased from 0.69 to 0.74 on the cmt-conference pair and from 0.76 to 0.82 on the cmt-sigkdd pair. CNPMap also achieved a precision of 0.75, outperforming most participating systems. However, its recall was slightly lower due to the conservative threshold used during the initial alignment phase. Our study reveals that integrating partition-based context into similarity computation significantly improves alignment quality, especially for complex ontologies. Future enhancements will focus on improving recall through adaptive thresholds and learning-based parameter tuning.


Keywords: CNPMap; Critical Nodes Detection Problem; Graph matching; Ontology matching; Ontology partitioning; Similarity enhancement.


Download Full Text
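The sigmoid-based refinement described in the abstract can be sketched as follows. This is an illustrative approximation, not CNPMap's published formula: the parameters `k` and `midpoint`, and the equal-weight blending of partition-level and entity-level affinities, are assumptions.

```python
import math

def enhance_similarity(base_sim, partition_affinity, entity_affinity,
                       k=10.0, midpoint=0.5):
    """Refine a base lexical similarity using contextual evidence, in the
    spirit of CNPMap's context-aware enhancement phase.

    base_sim           -- initial linguistic similarity in [0, 1]
    partition_affinity -- alignment strength of the entities' partitions, [0, 1]
    entity_affinity    -- agreement among related entity pairs, [0, 1]
    k, midpoint        -- hypothetical sigmoid steepness and centre
    """
    context = 0.5 * (partition_affinity + entity_affinity)
    # Sigmoid maps contextual evidence to a weight in (0, 1).
    weight = 1.0 / (1.0 + math.exp(-k * (context - midpoint)))
    # Strong context pushes the score up; weak context pulls it down.
    return max(0.0, min(1.0, base_sim * (0.5 + weight)))
```

With this shape, two entities whose partitions and neighbours agree see their lexical score boosted, while an isolated lexical match with no contextual support is damped below the initial threshold.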


 
Paper #6                                                                             

Optimized Lightweight CNN for Error Action Recognition in Physical Education Teaching

Shu Zhang and Jacklyn Anne D. Toldoya


Abstract: With rising living standards and growing public health awareness, participation in sports activities has received widespread attention. Traditional sports equipment, especially in educational environments, lacks the technological capability to provide precise guidance. The gap between this demand and available resources often leads to incorrect teaching methods, which may undermine students' sports training. To address these challenges, this paper proposes a motion action recognition system based on a lightweight convolutional neural network (CNN). Through one-dimensional median filtering and Z-score standardization, the method effectively reduces sensor-data noise, improves data accuracy and reliability, and lays a solid foundation for model training. The CNN architecture is optimized by adjusting key parameters such as network structure, convolution kernel size, and convolution stride, which are fine-tuned on the training data to maximize the model's recognition ability. The research results provide valuable insights into the effectiveness of teaching techniques and targeted feedback for improving sports training. After sufficient training, the system performed excellently on test data, accurately identifying erroneous movements across various sports actions, particularly in critical areas such as stroke movements, with an accuracy rate of up to 97.82% and an RMSE as low as 1.71%. These results demonstrate the model's high precision and robustness. The system shows great potential to address the current shortage of professional coaches by providing automatic, real-time feedback on motion accuracy.


Keywords: CNN; Error Action Recognition; Lightweight Deep Learning; Sports Teaching; Sports Training.


Download Full Text
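The preprocessing pipeline named in the abstract (one-dimensional median filtering followed by Z-score standardization) can be sketched in a few lines; the window size and the example signal below are illustrative assumptions, not values from the paper:

```python
import statistics

def median_filter_1d(signal, window=3):
    """One-dimensional median filter: replaces each sample with the median
    of its window, suppressing impulsive sensor noise (edges use a
    truncated window)."""
    half = window // 2
    return [statistics.median(signal[max(0, i - half): i + half + 1])
            for i in range(len(signal))]

def zscore(signal):
    """Z-score standardisation: shift to zero mean, scale to unit
    (sample) standard deviation."""
    mu = statistics.fmean(signal)
    sigma = statistics.stdev(signal)
    return [(x - mu) / sigma for x in signal]

# Typical ordering: denoise first, then standardise before the CNN.
raw = [1.0, 1.1, 9.0, 1.2, 1.0, 1.3, 1.1]   # impulsive spike at index 2
clean = median_filter_1d(raw)
features = zscore(clean)
```

Filtering before standardisation matters: a raw spike like the one above would otherwise inflate the standard deviation and distort every standardised feature.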

 
 International Association for Educators and Researchers (IAER), registered in England and Wales, Reg #OC418009. Copyright © IAER 2025